The problem of low-complexity, close-to-optimal channel decoding of linear codes with short to moderate block length is considered. It is shown that deep learning methods can be used to improve a standard belief propagation decoder, despite the large example space. Similar improvements are obtained for the min-sum algorithm. It is also shown that tying the parameters of the decoder across iterations, so as to form a recurrent neural network architecture, can be implemented with comparable results. The advantage is that significantly fewer parameters are required. We also introduce a recurrent neural decoder architecture based on the method of successive relaxation. Improvements over standard belief propagation are also observed on sparser Tanner graph representations of the codes. Furthermore, we demonstrate that the neural belief propagation decoder can be used to improve the performance, or alternatively reduce the computational complexity, of a close-to-optimal decoder of short BCH codes.
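As a concrete illustration of the decoder family the abstract refers to, the following is a minimal sketch of one iteration of weighted min-sum decoding on a Tanner graph, here for the (7,4) Hamming code. The scalar weight `w` stands in for the kind of learned, iteration-tied parameter a neural min-sum decoder would train; in this sketch it is simply fixed, and the code, LLR values, and function names are illustrative assumptions, not the paper's implementation.

```python
import numpy as np

# Parity-check matrix of the (7,4) Hamming code (illustrative choice).
H = np.array([[1, 1, 0, 1, 1, 0, 0],
              [1, 0, 1, 1, 0, 1, 0],
              [0, 1, 1, 1, 0, 0, 1]])

def min_sum_iteration(llr, v2c, w=0.8):
    """One round of check-node and variable-node message updates.

    `w` scales the check-to-variable messages, playing the role of a
    learned weight that is tied across iterations (fixed here, not trained).
    """
    m_checks, n_vars = H.shape
    c2v = np.zeros_like(v2c)
    for c in range(m_checks):
        idx = np.flatnonzero(H[c])
        for v in idx:
            others = idx[idx != v]                      # extrinsic neighbors only
            sign = np.prod(np.sign(v2c[c, others]))
            mag = np.min(np.abs(v2c[c, others]))
            c2v[c, v] = w * sign * mag                  # weighted min-sum rule
    new_v2c = np.zeros_like(v2c)
    for v in range(n_vars):
        idx = np.flatnonzero(H[:, v])
        for c in idx:
            others = idx[idx != c]
            new_v2c[c, v] = llr[v] + c2v[others, v].sum()  # channel + extrinsic
    marginals = llr + c2v.sum(axis=0)                   # per-bit output LLRs
    return new_v2c, marginals

llr = np.array([2.5, -1.0, 3.0, 0.5, 1.5, -2.0, 1.0])  # example channel LLRs
v2c = H * llr                                           # init messages with channel LLRs
v2c, marg = min_sum_iteration(llr, v2c)
hard = (marg < 0).astype(int)                           # hard decision per bit
```

In a trained decoder, `w` (or per-edge weights) would be optimized by gradient descent through unrolled iterations; reusing the same weights at every iteration yields the recurrent architecture mentioned above.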